64 research outputs found

    Informative Path Planning for Active Field Mapping under Localization Uncertainty

    Full text link
    Information gathering algorithms play a key role in unlocking the potential of robots for efficient data collection in a wide range of applications. However, most existing strategies neglect the fundamental problem of robot pose uncertainty, even though accounting for it is an implicit requirement for creating robust, high-quality maps. To address this issue, we introduce an informative planning framework for active mapping that explicitly accounts for pose uncertainty in both the mapping and planning tasks. Our strategy exploits a Gaussian Process (GP) model to capture a target environmental field given the uncertainty on its inputs. For planning, we formulate a new utility function that couples the localization and field mapping objectives in GP-based mapping scenarios in a principled way, without relying on any manually tuned parameters. Extensive simulations show that our approach outperforms existing strategies, with reductions in mean pose uncertainty and map error. We also present a proof of concept in an indoor temperature mapping scenario. Comment: 8 pages, 7 figures, revised submission to IEEE Robotics and Automation Letters (and IEEE International Conference on Robotics and Automation).
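
    As a rough illustration of the mapping component described above, the sketch below performs GP regression on field samples whose locations are uncertain, using the common approximation of inflating the kernel length-scale by the input (pose) variance. This is a minimal, hypothetical example rather than the paper's formulation; the RBF kernel, noise levels, and function names are assumptions.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential (RBF) kernel."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict_uncertain_inputs(X, y, Xq, input_var=0.0,
                                lengthscale=1.0, variance=1.0, noise=1e-2):
    """GP regression where the sample locations X carry isotropic pose
    uncertainty `input_var`.  The uncertainty is handled by simply
    inflating the kernel length-scale (a common approximation, assumed
    rather than taken from the paper)."""
    ls_eff = np.sqrt(lengthscale**2 + input_var)
    K = rbf_kernel(X, X, ls_eff, variance) + noise * np.eye(len(X))
    Ks = rbf_kernel(Xq, X, ls_eff, variance)
    Kss = rbf_kernel(Xq, Xq, ls_eff, variance)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks.T)
    return Ks @ alpha, np.diag(Kss - v.T @ v)   # predictive mean and variance

# Toy usage: map a 1-D temperature-like field from noisily localized samples.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(20, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(20)
Xq = np.linspace(0.0, 10.0, 50)[:, None]
mean, var = gp_predict_uncertain_inputs(X, y, Xq, input_var=0.05)
```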

    Visual Navigation in Unknown Environments

    Get PDF
    Navigation in mobile robotics involves two tasks: keeping track of the robot's position and moving according to a control strategy. When no prior knowledge of the environment is available, the problem is even more difficult, as the robot also has to build a map of its surroundings as it moves. These three problems ought to be solved in conjunction since they depend on each other. This thesis is about simultaneously controlling an autonomous vehicle, estimating its location and building a map of the environment. The main objective is to analyse the problem from a control-theoretical perspective based on an EKF-SLAM implementation. The contribution of this thesis is the analysis of system properties such as observability, controllability and stability, which allows us to propose an appropriate navigation scheme that produces well-behaved estimators and controllers, and consequently a well-behaved system as a whole. We present a steady-state analysis of the SLAM problem, identifying the conditions that lead to partial observability. It is shown that the effects of partial observability appear even in the ideal linear Gaussian case, indicating that linearisation is not the only cause of SLAM inconsistency and that observability must be achieved as a prerequisite to tackling the effects of linearisation. Additionally, full observability is shown to be necessary for diagonalisation of the covariance matrix, an approach often used to reduce the computational complexity of the SLAM algorithm, which we show also leads to full controllability.

    Focusing specifically on the case of a system with a single monocular camera, we present an observability analysis using the nullspace basis of the stripped observability matrix. The aim is to get a better understanding of the well-known intuitive behaviour of this type of system, such as the need to triangulate features from different positions in order to obtain accurate relative pose estimates between vehicle and camera. By characterising the unobservable directions in monocular SLAM, we are able to identify the vehicle motions required to maximise the number of observable states in the system.

    When closing the control loop of the SLAM system, both the feedback controller and the estimator are shown to be asymptotically stable. Furthermore, we show that the tracking error does not influence the estimation performance of a fully observable system and, vice versa, that control is not affected by estimation. Because of this, a higher-level motion strategy is required to enhance estimation, which is especially needed when performing SLAM with a single camera. Considering a real-time application, we propose a control strategy that optimises both the localisation of the vehicle and the feature map by computing the most appropriate control actions or movements; the actions are chosen to maximise an information-theoretic metric. Simulations and real-time experiments are performed to demonstrate the feasibility of the proposed control strategy.
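
    The role of observability discussed above can be illustrated with a toy linear model. The sketch below (a hypothetical 1-D robot with one landmark and a relative-position measurement, not taken from the thesis) builds the observability matrix, shows its rank deficiency, and shows how an additional absolute measurement restores full observability.

```python
import numpy as np

# Hypothetical 1-D linear SLAM error model: state = [robot position, landmark].
F = np.eye(2)                       # error-state transition (static landmark)
H_rel = np.array([[-1.0, 1.0]])     # relative measurement: landmark - robot

def observability_rank(F, H):
    """Rank of the linear observability matrix O = [H; HF; HF^2; ...]."""
    n = F.shape[0]
    O = np.vstack([H @ np.linalg.matrix_power(F, k) for k in range(n)])
    return np.linalg.matrix_rank(O)

print(observability_rank(F, H_rel))            # 1 of 2 -> partially observable
H_abs = np.vstack([H_rel, [[1.0, 0.0]]])       # add an absolute robot observation
print(observability_rank(F, H_abs))            # 2 of 2 -> fully observable
```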

    Selective combination of visual and thermal imaging for resilient localization in adverse conditions: Day and night, smoke and fire

    Get PDF
    Long-term autonomy in robotics requires perception systems that are resilient to unusual but realistic conditions that will eventually occur during extended missions. For example, unmanned ground vehicles (UGVs) need to be capable of operating safely in adverse and low-visibility conditions, such as at night or in the presence of smoke. The key to a resilient UGV perception system lies in the use of multiple sensor modalities, e.g., operating at different frequencies of the electromagnetic spectrum, to compensate for the limitations of a single sensor type. In this paper, visual and infrared imaging are combined in a Visual-SLAM algorithm to achieve localization. We propose to evaluate the quality of data provided by each sensor modality prior to data combination. This evaluation is used to discard low-quality data, i.e., data most likely to induce large localization errors. In this way, perceptual failures are anticipated and mitigated. An extensive experimental evaluation is conducted on data sets collected with a UGV in a range of environments and adverse conditions, including the presence of smoke (obstructing the visual camera), fire, extreme heat (saturating the infrared camera), low-light conditions (dusk), and at night with sudden variations of artificial light. A total of 240 trajectory estimates are obtained using five different variations of data sources and data combination strategies in the localization method. In particular, the proposed approach for selective data combination is compared to methods using a single sensor type or combining both modalities without preselection. We show that the proposed framework enables camera-based localization that is resilient to a wide range of low-visibility conditions.
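
    A minimal sketch of the pre-selection idea, assuming image entropy as the per-frame quality score and a fixed threshold (both hypothetical choices, not the metrics used in the paper): frames that fail the check are discarded before the Visual-SLAM combination.

```python
import numpy as np

def image_entropy(gray):
    """Shannon entropy (bits) of an 8-bit grayscale image, used here as a
    stand-in quality score."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def select_modalities(frames, threshold=4.0):
    """Keep only the modalities whose current frame passes the quality check;
    the localization back end then uses whatever survives."""
    return {name: f for name, f in frames.items() if image_entropy(f) >= threshold}

# Toy usage: a smoke-obscured (near-uniform) visual frame vs. an informative thermal frame.
rng = np.random.default_rng(1)
visual = np.full((120, 160), 200, dtype=np.uint8)
thermal = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)
print(list(select_modalities({"visual": visual, "thermal": thermal})))  # ['thermal']
```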

    Estimator stability analysis in SLAM

    Get PDF
    IFAC Symposium on Intelligent Autonomous Vehicles (IAV), 2004, Lisboa (Portugal). This work presents an analysis of the state estimation error dynamics for a linear system within the Kalman filter-based approach to Simultaneous Localization and Map Building. Our objective is to demonstrate that these dynamics are marginally stable. The paper also presents the modifications required in the observation model in order to guarantee zero-mean, stable error dynamics. Simulations for a one-dimensional robot and a planar vehicle are presented. This work was supported by the project 'Supervised learning of industrial scenes by means of an active vision equipped mobile robot' (J-00063). Peer reviewed.
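
    The marginal stability result can be reproduced qualitatively with a toy linear SLAM model. In the sketch below (illustrative values, not the paper's system), the Kalman-filter error dynamics e_{k+1} = (I - K H) F e_k have one eigenvalue on the unit circle, corresponding to the unobservable direction of the relative measurement.

```python
import numpy as np

# Toy 1-D SLAM error model: e = [robot error, landmark error], relative
# observation (landmark - robot), process noise on the robot only.
F = np.eye(2)
Q = np.diag([0.01, 0.0])
H = np.array([[-1.0, 1.0]])
R = np.array([[0.04]])

P = np.eye(2)
for _ in range(500):                       # iterate the filter towards steady state
    P = F @ P @ F.T + Q                    # prediction
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    P = (np.eye(2) - K @ H) @ P            # update

A_cl = (np.eye(2) - K @ H) @ F             # error dynamics: e_{k+1} = A_cl @ e_k (+ noise)
print(np.abs(np.linalg.eigvals(A_cl)))     # one eigenvalue stays at 1.0 -> marginal stability
```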

    Stochastic State Estimation for Simultaneous Localization and Map Building in Mobile Robotics

    Get PDF
    In Cutting Edge Robotics, 223-242, Advanced Robotic Systems Press, 2005. The study of stochastic models for Simultaneous Localization and Map Building (SLAM) in mobile robotics has been an active research topic for over fifteen years. Within the Kalman filter (KF) approach to SLAM, seminal work (Smith and Cheeseman, 1986) suggested that as successive landmark observations take place, the correlation between the estimates of the locations of those landmarks in the map grows continuously. This observation was later ratified (Dissanayake et al., 2001) with a proof that the estimated map converges monotonically to a relative map with zero uncertainty. The same work showed that the absolute accuracy of the map reaches a lower bound defined only by the initial vehicle uncertainty, and proved it for a one-landmark vehicle with no process noise. From an estimation-theoretic point of view, we explain these results as a consequence of partial observability. We show that error-free reconstruction of the map state vector is not possible with typical measurement models, regardless of the vehicle model chosen, and show experimentally that the expected error in state estimation is proportional to the number of landmarks used. Error-free reconstruction is only possible once full observability is guaranteed. This work was supported by the projects 'Supervised learning of industrial scenes by means of an active vision equipped mobile robot' (J-00063) and 'Integration of robust perception, learning, and navigation systems in mobile robotics' (J-0929). Peer reviewed.
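
    The lower bound on map accuracy can be seen in a one-dimensional toy example, assuming a stationary vehicle, a single landmark initialised from the first observation, and no process noise (the numbers are illustrative, not taken from the chapter):

```python
import numpy as np

p_v, R = 1.0, 0.04   # initial vehicle variance, relative-measurement noise (assumed values)

# Landmark initialised from the first observation, so vehicle and landmark
# estimates start fully correlated.
P = np.array([[p_v, p_v],
              [p_v, p_v + R]])
H = np.array([[-1.0, 1.0]])      # observation of (landmark - vehicle)

for _ in range(5000):            # stationary vehicle, no process noise
    S = H @ P @ H.T + R
    K = P @ H.T / S
    P = (np.eye(2) - K @ H) @ P

# The landmark variance approaches p_v: with only relative measurements the
# map error is bounded below by the initial vehicle uncertainty.
print(P[1, 1])
```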

    Accurate Gaussian Process Distance Fields with applications to Echolocation and Mapping

    Full text link
    This paper introduces a novel method to estimate distance fields from noisy point clouds using Gaussian Process (GP) regression. Distance fields, or distance functions, have gained popularity for applications like point cloud registration, odometry, SLAM, path planning, and shape reconstruction. A distance field provides a continuous representation of the scene: it is defined as the shortest distance between any query point and the closest surface. The key concept of the proposed method is a reverting function used to turn a GP-inferred occupancy field into an accurate distance field. The reverting function is specific to the chosen GP kernel. This paper provides the theoretical derivation of the proposed method and its relationship to existing techniques. The improved accuracy compared with existing distance fields is demonstrated with simulated experiments. The level of accuracy of the proposed approach enables novel applications that rely on precise distance estimation: this work presents echolocation and mapping frameworks for ultrasonic guided-wave sensing in metallic structures. These methods leverage the proposed distance field with a physics-based measurement model accounting for the propagation of the ultrasonic waves in the material. Real-world experiments are conducted to demonstrate the soundness of these frameworks.
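
    A minimal sketch of the reverting-function idea, assuming a squared-exponential kernel for which an inferred occupancy o = exp(-d^2 / (2 l^2)) can be inverted as d = l * sqrt(-2 ln o); the kernel choice, hyperparameters, and helper names are assumptions, not the paper's implementation.

```python
import numpy as np

def rbf(A, B, lengthscale):
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_distance_field(surface_pts, query_pts, lengthscale=0.1, noise=1e-4):
    """Infer an occupancy-like field with a GP (targets = 1 at surface points),
    then apply the kernel-specific reverting function to recover a distance."""
    K = rbf(surface_pts, surface_pts, lengthscale) + noise * np.eye(len(surface_pts))
    w = np.linalg.solve(K, np.ones(len(surface_pts)))
    occ = rbf(query_pts, surface_pts, lengthscale) @ w        # inferred occupancy field
    occ = np.clip(occ, 1e-300, 1.0)
    return lengthscale * np.sqrt(-2.0 * np.log(occ))          # reverting function (RBF case)

# Toy usage: surface points sampled on a unit circle; querying the centre
# returns a value close to the true distance of 1.
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
circle = np.stack([np.cos(theta), np.sin(theta)], axis=1)
print(gp_distance_field(circle, np.array([[0.0, 0.0]])))
```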